Search Results: "Francois Marier"

7 January 2016

Francois Marier: Streamzap remotes and evdev in MythTV

Modern versions of Linux and MythTV enable infrared remote controls without the need for lirc. Here's how I migrated my Streamzap remote to evdev.

Installing packages In order to avoid conflicts between evdev and lirc, I started by removing lirc and its config:
apt purge lirc
and then I installed this tool:
apt install ir-keytable

Remapping keys While my Streamzap remote works out of the box with kernel 3.16, the keycodes that it sends to Xorg are not the ones that MythTV expects. I therefore copied the existing mapping:
cp /lib/udev/rc_keymaps/streamzap /home/mythtv/
and changed it to this:
0x28c0 KEY_0
0x28c1 KEY_1
0x28c2 KEY_2
0x28c3 KEY_3
0x28c4 KEY_4
0x28c5 KEY_5
0x28c6 KEY_6
0x28c7 KEY_7
0x28c8 KEY_8
0x28c9 KEY_9
0x28ca KEY_ESC
0x28cb KEY_MUTE #  
0x28cc KEY_UP
0x28cd KEY_RIGHTBRACE
0x28ce KEY_DOWN
0x28cf KEY_LEFTBRACE
0x28d0 KEY_UP
0x28d1 KEY_LEFT
0x28d2 KEY_ENTER
0x28d3 KEY_RIGHT
0x28d4 KEY_DOWN
0x28d5 KEY_M
0x28d6 KEY_ESC
0x28d7 KEY_L
0x28d8 KEY_P
0x28d9 KEY_ESC
0x28da KEY_BACK # <
0x28db KEY_FORWARD # >
0x28dc KEY_R
0x28dd KEY_PAGEUP
0x28de KEY_PAGEDOWN
0x28e0 KEY_D
0x28e1 KEY_I
0x28e2 KEY_END
0x28e3 KEY_A
The complete list of all EV_KEY keycodes can be found in the kernel. The following command will write this mapping to the driver:
/usr/bin/ir-keytable w /home/mythtv/streamzap -d /dev/input/by-id/usb-Streamzap__Inc._Streamzap_Remote_Control-event-if00
and they should take effect once MythTV is restarted.

Applying the mapping at boot While the naïve solution is to apply the mapping at boot (for example, by sticking it in /etc/rc.local), that only works if the right modules are loaded before rc.local runs. A much better solution is to write a udev rule so that the mapping is written after the driver is loaded. I created /etc/udev/rules.d/streamzap.rules with the following:
# Configure remote control for MythTV
# https://www.mythtv.org/wiki/User_Manual:IR_control_via_evdev#Modify_key_codes
ACTION=="add", ATTRS{idVendor}=="0e9c", ATTRS{idProduct}=="0000", RUN+="/usr/bin/ir-keytable -c -w /home/mythtv/streamzap -D 1000 -P 250 -d /dev/input/by-id/usb-Streamzap__Inc._Streamzap_Remote_Control-event-if00"
and got the vendor and product IDs using:
grep '^[IN]:' /proc/bus/input/devices
The -D and -P parameters control what happens when a button on the remote is held down and the keypress must be repeated. These delays are in milliseconds.
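To double-check that the new mapping is active, ir-keytable's test mode will print the scancodes and keycodes generated as you press buttons on the remote (stop it with Ctrl+C). This is a quick sketch using the same device path as above:
ir-keytable -t -d /dev/input/by-id/usb-Streamzap__Inc._Streamzap_Remote_Control-event-if00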

30 December 2015

Francois Marier: Linux kernel module options on Debian

Linux kernel modules often have options that can be set. Here's how to make use of them on Debian-based systems, using the i915 Intel graphics driver as an example. To get the list of all available options:
modinfo -p i915
To check the current value of a particular option:
cat /sys/module/i915/parameters/enable_ppgtt
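To review every available option at once, a short shell loop over that parameters directory also works (a minimal sketch; the parameter names vary from driver to driver):
for p in /sys/module/i915/parameters/*; do
    printf '%s=%s\n' "$(basename "$p")" "$(cat "$p")"
done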
To give that option a value when the module is loaded, create a new /etc/modprobe.d/i915.conf file and put the following in it:
options i915 enable_ppgtt=0
and then re-generate the initial RAM disks:
update-initramfs -u -k all
Alternatively, that option can be set at boot time on the kernel command line by setting the following in /etc/default/grub:
GRUB_CMDLINE_LINUX="i915.enable_ppgtt=0"
and then updating the grub config:
update-grub2

7 December 2015

Francois Marier: Tweaking Cookies For Privacy in Firefox

Cookies are an important part of the Web since they are the primary mechanism that websites use to maintain user sessions. Unfortunately, they are also abused by surveillance marketing companies to follow you around the Web. Here are a few things you can do in Firefox to protect your privacy.

Cookie Expiry Cookies are sent from the website to your browser via a Set-Cookie HTTP header on the response. It looks like this:
HTTP/1.1 200 OK
Date: Mon, 07 Dec 2015 16:55:43 GMT
Server: Apache
Set-Cookie: SESSIONID=65576c6c64206e6f2c657920756f632061726b636465742065686320646f2165
Content-Length: 2036
Content-Type: text/html;charset=UTF-8
When your browser sees this, it saves that cookie for the given hostname and keeps it until you close the browser. Should a site want to persist their cookie for longer, they can add an Expires attribute:
Set-Cookie: SESSIONID=65576c...; expires=Tue, 06-Dec-2016 22:38:26 GMT
in which case the browser will retain the cookie until the server-provided expiry date (which could be in a few years). Of course, that's if you don't instruct your browser to do things differently.

Third-Party Cookies So far, we've only looked at first-party cookies: the ones set by the website you visit and which are typically used to synchronize your login state with the server. There is however another kind: third-party cookies. These ones are set by the third-party resources that a page loads. For example, if a page loads JavaScript from a third-party ad network, you can be pretty confident that they will set their own cookie in order to build a profile on you and serve you "better and more relevant ads".

Controlling Third-Party Cookies If you'd like to opt out of these, you have a couple of options. The first one is to turn off third-party cookies entirely by going back into the Privacy preferences and selecting "Never" next to the "Accept third-party cookies" setting (network.cookie.cookieBehavior = 1). Unfortunately, turning off third-party cookies entirely tends to break a number of sites which rely on this functionality (for example as part of their login process). A more forgiving option is to accept third-party cookies only for sites which you have actually visited directly. For example, if you visit Facebook and log in, you will get a cookie from them. Then when you visit other sites which include Facebook widgets, they will not recognize you unless you allow cookies to be sent in a third-party context. To do that, choose the "From visited" option (network.cookie.cookieBehavior = 3). In addition to this setting, you can also choose to make all third-party cookies automatically expire when you close Firefox by setting the network.cookie.thirdparty.sessionOnly option to true in about:config.

Other Ways to Limit Third-Party Cookies Another way to limit undesirable third-party cookies is to tell the browser to avoid connecting to trackers in the first place. This functionality is now built into Private Browsing mode and enabled by default. To enable it outside of Private Browsing too, simply go into about:config and set privacy.trackingprotection.enabled to true. You could also install the EFF's Privacy Badger add-on which uses heuristics to detect and block trackers, unlike Firefox tracking protection which uses a blocklist of known trackers.

My Recommended Settings On my work computer I currently use the following:
network.cookie.cookieBehavior = 3
network.cookie.lifetimePolicy = 3
network.cookie.lifetime.days = 5
network.cookie.thirdparty.sessionOnly = true
privacy.trackingprotection.enabled = true
which allows me to stay logged into most sites for the whole week (no matter how often I restart Firefox Nightly) while limiting tracking and other undesirable cookies as much as possible.
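If you prefer to keep these settings in a file, the same preferences can be put in a user.js file in your Firefox profile directory; a minimal sketch of the settings above:
user_pref("network.cookie.cookieBehavior", 3);
user_pref("network.cookie.lifetimePolicy", 3);
user_pref("network.cookie.lifetime.days", 5);
user_pref("network.cookie.thirdparty.sessionOnly", true);
user_pref("privacy.trackingprotection.enabled", true);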

13 November 2015

Francois Marier: How Tracking Protection works in Firefox

Firefox 42, which was released last week, introduced a new feature in its Private Browsing mode: tracking protection. If you are interested in how the tracking protection list is put together and then used in Firefox, this post is for you.

Safe Browsing lists There are many possible ways to download URL lists to the browser and check against that list before loading anything. One of those is already implemented as part of our malware and phishing protection. It uses the Safe Browsing v2.2 protocol. In a nutshell, the way that this works is that each URL on the block list is hashed (using SHA-256) and then that list of hashes is downloaded by Firefox and stored into a data structure on disk:
  • ~/.cache/mozilla/firefox/XXXX/safebrowsing/mozstd-track* on Linux
  • ~/Library/Caches/Firefox/Profiles/XXXX/safebrowsing/mozstd-track* on Mac
  • C:\Users\XXXX\AppData\Local\mozilla\firefox\profiles\XXXX\safebrowsing\mozstd-track* on Windows
This sbdbdump script can be used to extract the hashes contained in these files and will output something like this:
$ ~/sbdbdump/dump.py -v .
- Reading sbstore: mozstd-track-digest256
[mozstd-track-digest256] magic 1231AF3B Version 3 NumAddChunk: 1 NumSubChunk: 0 NumAddPrefix: 0 NumSubPrefix: 0 NumAddComplete: 1696 NumSubComplete: 0
[mozstd-track-digest256] AddChunks: 1445465225
[mozstd-track-digest256] SubChunks:
...
[mozstd-track-digest256] addComplete[chunk:1445465225] e48768b0ce59561e5bc141a52061dd45524e75b66cad7d59dd92e4307625bdc5
...
[mozstd-track-digest256] MD5: 81a8becb0903de19351427b24921a772
The name of the blocklist being dumped here (mozstd-track-digest256) is set in the urlclassifier.trackingTable preference which you can find in about:config. The most important part of the output shown above is the addComplete line which contains a hash that we will see again in a later section.

List lookups Once it's time to load a resource, Firefox hashes the URL, as well as a few variations of it, and then looks for it in the local lists. If there's no match, then the load proceeds. If there's a match, then we do an additional check against a pairwise allowlist. The pairwise allowlist (hardcoded in the urlclassifier.trackingWhitelistTable pref) is designed to encode what we call "entity relationships". The list groups related domains together for the purpose of checking whether a load is first or third party (e.g. twitter.com and twimg.com both belong to the same entity). Entries on this list (named mozstd-trackwhite-digest256) look like this:
twitter.com/?resource=twimg.com
which translates to "if you're on the twitter.com site, then don't block resources from twimg.com". If there's a match on the second list, we don't block the load. It's only when we get a match on the first list and not the second one that we go ahead and cancel the network load. If you visit our test page, you will see tracking protection in action with a shield icon in the URL bar. Opening the developer tool console will expose the URL of the resource that was blocked:
The resource at "https://trackertest.org/tracker.js" was blocked because tracking protection is enabled.

Creating the lists The blocklist is created by Disconnect according to their definition of tracking. The Disconnect list is on their Github page, but the copy we use in Firefox is the copy we have in our own repository. Similarly the Disconnect entity list is from here but our copy is in our repository. Should you wish to be notified of any changes to the lists, you can simply subscribe to this Atom feed. To convert this JSON-formatted list into the binary format needed by the Safe Browsing code, we run a custom list generation script whenever the list changes on GitHub. If you run that script locally using the same configuration as our server stack, you can see the conversion from the original list to the binary hashes. Here's a sample entry from the mozstd-track-digest256.log file:
[m] twimg.com >> twimg.com/
[canonicalized] twimg.com/
[hash] e48768b0ce59561e5bc141a52061dd45524e75b66cad7d59dd92e4307625bdc5
and one from mozstd-trackwhite-digest256.log:
[entity] Twitter >> (canonicalized) twitter.com/?resource=twimg.com, hash a8e9e3456f46dbe49551c7da3860f64393d8f9d96f42b5ae86927722467577df
This, in combination with the sbdbdump script mentioned earlier, will allow you to audit the contents of the local lists.
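Since the digest256 lists contain full SHA-256 hashes of the canonicalized entries, you should be able to reproduce the hash shown in the log with a one-liner like this (a sketch, assuming the canonicalized form shown above):
printf 'twimg.com/' | sha256sum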

Serving the lists The way that the binary lists are served to Firefox is through a custom server component written by Mozilla: shavar. Every hour, Firefox requests updates from shavar.services.mozilla.com. If new data is available, then the whole list is downloaded again. Otherwise, all it receives in return is an empty 204 response. Should you want to play with it and run your own server, follow the installation instructions and then go into about:config to change these preferences to point to your own instance:
browser.trackingprotection.gethashURL
browser.trackingprotection.updateURL
Note that on Firefox 43 and later, these prefs have been renamed to:
browser.safebrowsing.provider.mozilla.gethashURL
browser.safebrowsing.provider.mozilla.updateURL

Learn more If you want to learn more about how tracking protection works in Firefox, you can find all of the technical details on the Mozilla wiki or you can ask questions on our mailing list. Thanks to Tanvi Vyas for reviewing a draft of this post.

17 October 2015

Francois Marier: Introducing reboot-notifier for jessie and stretch

One of the packages that got lost in the transition from Debian wheezy to jessie was the update-notifier-common package which could be used to receive notifications when a reboot is needed (for example, after installing a kernel update). I decided to wrap this piece of functionality along with a simple cron job and create a new package: reboot-notifier. Because it uses the same file (/var/run/reboot-required) to indicate that a reboot is needed, it should work fine with any custom scripts that admins might have written prior to jessie. If you're running sid or stretch, all you need to do is:
apt install reboot-notifier
On jessie, you'll need to add the backports repository to /etc/apt/sources.list:
deb http://httpredir.debian.org/debian jessie-backports main
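As mentioned above, reboot-notifier reuses /var/run/reboot-required, so any existing custom check keeps working. Such a check can be as simple as this (a minimal sketch):
#!/bin/sh
# warn if the flag file created after a kernel update is present
if [ -f /var/run/reboot-required ]; then
    echo "Reboot required on $(hostname)"
fi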

19 September 2015

Francois Marier: Hooking into docking and undocking events to run scripts

In order to automatically update my monitor setup and activate/deactivate my external monitor when plugging my ThinkPad into its dock, I found a way to hook into the ACPI events and run arbitrary scripts. This was tested on a T420 with a ThinkPad Dock Series 3 as well as a T440p with a ThinkPad Ultra Dock. The only requirement is the ThinkPad ACPI kernel module which you can find in the tp-smapi-dkms package in Debian. That's what generates the ibm/hotkey events we will listen for.

Hooking into the events Create the following ACPI event scripts as suggested in this guide. Firstly, /etc/acpi/events/thinkpad-dock:
event=ibm/hotkey LEN0068:00 00000080 00004010
action=su francois -c "/home/francois/bin/external-monitor dock"
Secondly, /etc/acpi/events/thinkpad-undock:
event=ibm/hotkey LEN0068:00 00000080 00004011
action=su francois -c "/home/francois/bin/external-monitor undock"
then restart udev:
sudo service udev restart

Finding the right events To make sure the events are the right ones, you can watch them live using:
sudo acpi_listen
and ensure that your script is actually running by adding:
logger "ACPI event: $*"
at the beginning of it and then looking in /var/log/syslog for lines like:
logger: external-monitor undock
logger: external-monitor dock
If that doesn't work for some reason, try using an ACPI event script like this:
event=ibm/hotkey
action=logger %e
to see which event you should hook into.
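For example, when docking, acpi_listen should print a line matching the event string used in /etc/acpi/events/thinkpad-dock above, something like:
ibm/hotkey LEN0068:00 00000080 00004010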

Using xrandr inside an ACPI event script Because the script will be running outside of your user session, the xrandr calls must explicitly set the display variable (-d). This is what I used:
#!/bin/sh
logger "ACPI event: $*"
xrandr -d :0.0 --output DP2 --auto
xrandr -d :0.0 --output eDP1 --auto
xrandr -d :0.0 --output DP2 --left-of eDP1

14 September 2015

Francois Marier: Setting up a network scanner using SANE

Sharing a scanner over the network using SANE is fairly straightforward. Here's how I shared a scanner on a server (running Debian jessie) with a client (running Ubuntu trusty).

Install SANE The SANE packages need to be installed on both the client and the server. You should check whether or not your scanner is supported by the latest stable release or by the latest development version. In my case, I needed to get a Canon LiDE 220 working so I had to grab the libsane 1.0.25+git20150528-1 package from Debian experimental.

Test the scanner locally Once you have SANE installed, you can test it out locally to confirm that it detects your scanner:
scanimage -L
This should give you output similar to this:
device `genesys:libusb:001:006' is a Canon LiDE 220 flatbed scanner
If that doesn't work, make sure that the scanner is actually detected by the USB stack:
$ lsusb | grep Canon
Bus 001 Device 006: ID 04a9:190f Canon, Inc.
and that its USB ID shows up in the SANE backend it needs:
$ grep 190f /etc/sane.d/genesys.conf 
usb 0x04a9 0x190f
To do a test scan, simply run:
scanimage > test.ppm
and then take a look at the (greyscale) image it produced (test.ppm).
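The available options depend on the backend (scanimage --help lists them), but a colour scan at a higher resolution typically looks something like this (a sketch; the option names may differ for your scanner):
scanimage --mode Color --resolution 300 > test-colour.ppm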

Configure the server With the scanner working locally, it's time to expose it to network clients by adding the client IP addresses to /etc/sane.d/saned.conf:
## Access list
192.168.1.3
and then opening the appropriate port on your firewall (typically /etc/network/iptables in Debian):
-A INPUT -s 192.168.1.3 -p tcp --dport 6566 -j ACCEPT
Then you need to ensure that the SANE server is running by setting the following in /etc/default/saned:
RUN=yes
if you're using the sysv init system, or by running this command:
systemctl enable saned.socket
if using systemd. I actually had to reboot to make saned visible to systemd, so if you still run into these errors:
$ service saned start
Failed to start saned.service: Unit saned.service is masked.
you're probably just one reboot away from getting it to work.

Configure the client On the client, all you need to do is add the following to /etc/sane.d/net.conf:
connect_timeout = 60
myserver
where myserver is the hostname or IP address of the server running saned.

Test the scanner remotely With everything in place, you should be able to see the scanner from the client computer:
$ scanimage -L
device `net:myserver:genesys:libusb:001:006' is a Canon LiDE 220 flatbed scanner
and successfully perform a test scan using this command:
scanimage > test.ppm

29 August 2015

Francois Marier: Letting someone ssh into your laptop using Pagekite

In order to investigate a bug I was running into, I recently had to give my colleague ssh access to my laptop behind a firewall. The easiest way I found to do this was to create an account for him on my laptop and set up a pagekite frontend on my Linode server and a pagekite backend on my laptop.

Frontend setup Setting up my Linode server in order to make the ssh service accessible and proxy the traffic to my laptop was fairly straightforward. First, I had to install the pagekite package (already in Debian and Ubuntu) and open up a port on my firewall by adding the following to both /etc/network/iptables.up.rules and /etc/network/ip6tables.up.rules:
-A INPUT -p tcp --dport 10022 -j ACCEPT
Then I created a new CNAME for my server in DNS:
pagekite.fmarier.org.   3600    IN  CNAME   fmarier.org.
With that in place, I started the pagekite frontend using this command:
pagekite --clean --isfrontend --rawports=virtual --ports=10022 --domain=raw:pagekite.fmarier.org:Password1

Backend setup After installing the pagekite and openssh-server packages on my laptop and creating a new user account:
adduser roc
I used this command to connect my laptop to the pagekite frontend:
pagekite --clean --frontend=pagekite.fmarier.org:10022 --service_on=raw/22:pagekite.fmarier.org:localhost:22:Password1

Client setup Finally, my colleague needed to add the following entry to ~/.ssh/config:
Host pagekite.fmarier.org
  CheckHostIP no
  ProxyCommand /bin/nc -X connect -x %h:10022 %h %p
and install the netcat-openbsd package since other versions of netcat don't work. On Fedora, we used netcat-openbsd-1.89 successfully, but this newer package may also work. He was then able to ssh into my laptop via ssh roc@pagekite.fmarier.org.

Making settings permanent I was quite happy setting things up temporarily on the command-line, but it's also possible to persist these settings and to make both the pagekite frontend and backend start up automatically at boot. See the documentation for how to do this on Debian and Fedora.

1 August 2015

Francois Marier: Setting the wifi regulatory domain on Linux and OpenWRT

The list of available wifi channels is slightly different from country to country. To ensure access to the right channels and transmit power settings, one needs to set the right regulatory domain in the wifi stack.

Linux For most Linux-based computers, you can look up and change the current regulatory domain using these commands:
iw reg get
iw reg set CA
where CA is the two-letter country code of the country where the device is located. On Debian and Ubuntu, you can make this setting permanent by putting the country code in /etc/default/crda. Finally, to see the list of channels that are available in the current config, use:
iwlist wlan0 frequency
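As for the /etc/default/crda file mentioned above, it contains a single line with the country code, for example:
REGDOMAIN=CA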

OpenWRT On OpenWRT-based routers (including derivatives like Gargoyle), looking up and setting the regulatory domain temporarily works the same way (i.e. the iw commands above). In order to persist your changes though, you need to use the uci command:
uci set wireless.radio0.country=CA
uci set wireless.radio1.country=CA
uci commit wireless
where wireless.radio0 and wireless.radio1 are the wireless devices specific to your router. You can look them up using:
uci show wireless
To test that it worked, simply reboot the router and then look at the selected regulatory domain:
iw reg get

Scanning the local wifi environment Once your devices are set to the right country, you should scan the local environment to pick the least congested wifi channel. You can use the Kismet spectools (free software) if you have the hardware, otherwise WifiAnalyzer (proprietary) is a good choice on Android (remember to manually set the available channels in the settings).

8 June 2015

Craig Small: Checking Cloudflare SSL

My website for a while has used CloudFlare as its front-end. It's a rather nice setup and means my real server gets less of a hammering, which is a good thing. A few months ago they enabled a feature called Universal SSL which I have also added to my site. Around the same time, my SSL check scripts started failing for the website: the certificate had expired apparently many many days ago. Something wasn't right. The Problem The problem was that at first I'd get emails saying "The SSL certificate for enc.com.au (CN: ) has expired!". I use a program called ssl-cert-check that would check all (web, smtp, imap) of my certificates. It's very easy to forget to renew and this program runs daily and does a simple check. Running the program on the command line gave some more information, but nothing (for me) that really helped:
$ /usr/bin/ssl-cert-check -s enc.com.au -p 443
Host Status Expires Days
----------------------------------------------- ------------ ------------ ----
unable to load certificate
140364897941136:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:701:Expecting: TRUSTED CERTIFICATE
unable to load certificate
139905089558160:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:701:Expecting: TRUSTED CERTIFICATE
unable to load certificate
140017829234320:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:701:Expecting: TRUSTED CERTIFICATE
unable to load certificate
140567473276560:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:701:Expecting: TRUSTED CERTIFICATE
enc.com.au:443 Expired -2457182
So, apparently, there was something wrong with the certificate. The problem was that this was CloudFlare, who seem to have a good idea of how to handle certificates, and all my browsers were happy. ssl-cert-check is a shell script that uses openssl to make the connection, so the next stop was to see what openssl had to say.
$ echo ""   /usr/bin/openssl s_client -connect enc.com.au:443 CONNECTED(00000003)
140115756086928:error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error:s23_clnt.c:769:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 345 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
---
No peer certificate available. That was the clue I was looking for. Where's my Certificate? CloudFlare Universal SSL uses certificates that have multiple domains in the one certificate. They do this by having one canonical name which is something like sni(numbers).cloudflaressl.com and then multiple Subject Alternative Names (a bit like ServerAlias in apache configurations). This way a single server with a single certificate can serve multiple domains. The way that the client tells the server which website it is looking for is Server Name Indication (SNI). As part of the TLS handshaking the client tells the server "I want website www.enc.com.au". The thing is, by default, both openssl s_client and the check script do not use this feature. That was why the SSL certificate checks were failing: the server was waiting for the client to ask what website it wanted. Modern browsers do this automatically so it just works for them. The Fix For openssl on the command line, there is a flag -servername which does the trick nicely:
$ echo ""   /usr/bin/openssl s_client -connect enc.com.au:443 -servername enc.com.au 
CONNECTED(00000003)
depth=2 C = GB, ST = Greater Manchester, L = Salford, O = COMODO CA Limited, CN = COMODO ECC Certification Authority
verify error:num=20:unable to get local issuer certificate
---
(lots of good SSL type messages)
That made openssl happy. We asked the server what website we were interested in with the -servername flag and got the certificate. The fix for ssl-cert-check is even simpler. Like a lot of things, once you know the problem, the solution is not only easy to work out but someone has done it for you already. There is a Debian bug report on this problem with a simple fix from Francois Marier. Just edit the check script and change the line that has:
 TLSSERVERNAME="FALSE"
and change FALSE to TRUE, so the line reads TLSSERVERNAME="TRUE". Then the script is happy too:
$ ssl-cert-check -s enc.com.au -p https
Host Status Expires Days
----------------------------------------------- ------------ ------------ ----
enc.com.au:https Valid Sep 30 2015 114
All working and as expected! This isn't really a CloudFlare problem as such, it's just that this is the first place I had seen these sorts of SNI certificates being used in something I administer (or more correctly, something behind the something).

23 May 2015

Francois Marier: Usual Debian Server Setup

I manage a few servers for myself, friends and family as well as for the Libravatar project. Here is how I customize recent releases of Debian on those servers.

Hardware tests
apt-get install memtest86+ smartmontools e2fsprogs
Prior to spending any time configuring a new physical server, I like to ensure that the hardware is fine. To check memory, I boot into memtest86+ from the grub menu and let it run overnight. Then I check the hard drives using:
smartctl -t long /dev/sdX
badblocks -swo badblocks.out /dev/sdX

Configuration
apt-get install etckeeper git sudo vim
To keep track of the configuration changes I make in /etc/, I use etckeeper to keep that directory in a git repository and make the following changes to the default /etc/etckeeper/etckeeper.conf:
  • turn off daily auto-commits
  • turn off auto-commits before package installs
To get more control over the various packages I install, I change the default debconf level to medium:
dpkg-reconfigure debconf
Since I use vim for all of my configuration file editing, I make it the default editor:
update-alternatives --config editor

ssh
apt-get install openssh-server mosh fail2ban
Since most of my servers are set to UTC time, I like to use my local timezone when sshing into them. Looking at file timestamps is much less confusing that way. I also ensure that the locale I use is available on the server by adding it to the list of generated locales:
dpkg-reconfigure locales
Other than that, I harden the ssh configuration and end up with the following settings in /etc/ssh/sshd_config (jessie):
HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
UsePrivilegeSeparation sandbox
AuthenticationMethods publickey
PasswordAuthentication no
PermitRootLogin no
AcceptEnv LANG LC_* TZ
LogLevel VERBOSE
AllowGroups sshuser
or the following for wheezy servers:
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
KexAlgorithms ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512,hmac-sha2-256
On those servers where I need duplicity/paramiko to work, I also add the following:
KexAlgorithms ...,diffie-hellman-group-exchange-sha1
MACs ...,hmac-sha1
Then I remove the "Accepted" filter in /etc/logcheck/ignore.d.server/ssh (first line) to get a notification whenever anybody successfully logs into my server. I also create a new group and add the users that need ssh access to it:
addgroup sshuser
adduser francois sshuser
and add a timeout for root sessions by putting this in /root/.bash_profile:
TMOUT=600

Security checks
apt-get install logcheck logcheck-database fcheck tiger debsums corekeeper
apt-get remove john john-data rpcbind tripwire
Logcheck is the main tool I use to keep an eye on log files, which is why I add a few additional log files to the default list in /etc/logcheck/logcheck.logfiles:
/var/log/apache2/error.log
/var/log/mail.err
/var/log/mail.warn
/var/log/mail.info
/var/log/fail2ban.log
while ensuring that the apache logfiles are readable by logcheck:
chmod a+rx /var/log/apache2
chmod a+r /var/log/apache2/*
and fixing the log rotation configuration by adding the following to /etc/logrotate.d/apache2:
create 644 root adm
I also modify the main logcheck configuration file (/etc/logcheck/logcheck.conf):
INTRO=0
FQDN=0
Other than that, I enable daily checks in /etc/default/debsums and customize a few tiger settings in /etc/tiger/tigerrc:
Tiger_Check_RUNPROC=Y
Tiger_Check_DELETED=Y
Tiger_Check_APACHE=Y
Tiger_FSScan_WDIR=Y
Tiger_SSH_Protocol='2'
Tiger_Passwd_Hashes='sha512'
Tiger_Running_Procs='rsyslogd cron atd /usr/sbin/apache2 postgres'
Tiger_Listening_ValidProcs='sshd mosh-server ntpd'

General hardening
apt-get install harden-clients harden-environment harden-servers apparmor apparmor-profiles apparmor-profiles-extra
While the harden packages are configuration-free, AppArmor must be manually enabled:
perl -pi -e 's,GRUB_CMDLINE_LINUX="(.*)"$,GRUB_CMDLINE_LINUX="$1 apparmor=1 security=apparmor",' /etc/default/grub
update-grub

Entropy and timekeeping
apt-get install haveged rng-tools ntp
To keep the system clock accurate and increase the amount of entropy available to the server, I install the above packages and add the tpm_rng module to /etc/modules.
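Adding the module is just a matter of appending its name to /etc/modules, for example:
echo tpm_rng >> /etc/modules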

Preventing mistakes
apt-get install molly-guard safe-rm sl
The above packages are all about catching mistakes (such as accidental deletions). However, in order to extend the molly-guard protection to mosh sessions, one needs to manually apply a patch.

Package updates
apt-get install apticron unattended-upgrades deborphan debfoster apt-listchanges update-notifier-common aptitude popularity-contest
These tools help me keep packages up to date and remove unnecessary or obsolete packages from servers. On Rackspace servers, a small configuration change is needed to automatically update the monitoring tools. In addition to this, I use the update-notifier-common package along with the following cronjob in /etc/cron.daily/reboot-required:
#!/bin/sh
cat /var/run/reboot-required 2> /dev/null || true
to send me a notification whenever a kernel update requires a reboot to take effect.

Handy utilities
apt-get install renameutils atool iotop sysstat lsof mtr-tiny
Most of these tools are configuration-free, except for sysstat, which requires enabling data collection in /etc/default/sysstat to be useful.

Apache configuration
apt-get install apache2-mpm-event
While configuring apache is often specific to each server and the services that will be running on it, there are a few common changes I make. I enable these in /etc/apache2/conf.d/security:
<Directory />
    AllowOverride None
    Order Deny,Allow
    Deny from all
</Directory>
ServerTokens Prod
ServerSignature Off
and remove cgi-bin directives from /etc/apache2/sites-enabled/000-default. I also create a new /etc/apache2/conf.d/servername which contains:
ServerName machine_hostname

Mail
apt-get install postfix
Configuring mail properly is tricky but the following has worked for me. In /etc/hostname, put the bare hostname (no domain), but in /etc/mailname put the fully qualified hostname. Change the following in /etc/postfix/main.cf:
inet_interfaces = loopback-only
myhostname = (fully qualified hostname)
smtp_tls_security_level = may
smtp_tls_protocols = !SSLv2, !SSLv3
Set the following aliases in /etc/aliases:
  • set francois as the destination of root emails
  • set an external email address for francois
  • set root as the destination for www-data emails
before running newaliases to update the aliases database. Create a new cronjob (/etc/cron.hourly/checkmail):
#!/bin/sh
ls /var/mail
to ensure that email doesn't accumulate unmonitored on this box. Finally, set reverse DNS for the server's IPv4 and IPv6 addresses and then test the whole setup using mail root.
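A quick way to do that last test, assuming a local mail command is installed:
echo "This is a test" | mail -s "test" root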

Network tuning To reduce the server's contribution to bufferbloat I change the default kernel queueing discipline (jessie or later) by putting the following in /etc/sysctl.conf:
net.core.default_qdisc=fq_codel
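The new default takes effect at the next boot; to apply and verify it immediately, something like this should do (as root):
sysctl -p
sysctl net.core.default_qdisc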

3 April 2015

Francois Marier: Using OpenVPN on Android Lollipop

I use my Linode VPS as a VPN endpoint for my laptop when I'm using untrusted networks and I wanted to do the same on my Android 5 (Lollipop) phone. It turns out that it's quite easy to do (doesn't require rooting your phone) and that it works very well.

Install OpenVPN Once you have installed and configured OpenVPN on the server, you need to install the OpenVPN app for Android (available both on F-Droid and Google Play). From the easy-rsa directory you created while generating the server keys, create a new keypair for your phone:
./build-key nexus6        # "nexus6" as Name, no password
and then copy the following files onto your phone:
  • ca.crt
  • nexus6.crt
  • nexus6.key
  • ta.key

Create a new VPN config If you configured your server as per my instructions, these are the settings you'll need to use on your phone: Basic:
  • LZO Compression: YES
  • Type: Certificates
  • CA Certificate: ca.crt
  • Client Certificate: nexus6.crt
  • Client Certificate Key: nexus6.key
Server list:
  • Server address: hafnarfjordur.fmarier.org
  • Port: 1194
  • Protocol: UDP
  • Custom Options: NO
Authentication/Encryption:
  • Expect TLS server certificate: YES
  • Certificate hostname check: YES
  • Remote certificate subject: server
  • Use TLS Authentication: YES
  • TLS Auth File: ta.key
  • TLS Direction: 1
  • Encryption cipher: AES-256-CBC
  • Packet authentication: SHA384 (not SHA-384)
That's it. Everything else should work with the defaults.

25 March 2015

Francois Marier: Keeping up with noisy blog aggregators using PlanetFilter

I follow a few blog aggregators (or "planets") and it's always a struggle to keep up with the amount of posts that some of these get. The best strategy I have found so far is to filter them so that I remove the blogs I am not interested in, which is why I wrote PlanetFilter.

Other options In my opinion, the first step in starting a new free software project should be to look for a reason not to do it :) So I started by looking for another approach and by asking people around me how they dealt with the firehoses that are Planet Debian and Planet Mozilla. It seems like a lot of people choose to "randomly sample" planet feeds and only read a fraction of the posts that are sent through there. Personally however, I find there are a lot of authors whose posts I never want to miss so this option doesn't work for me. A better option that other people have suggested is to avoid subscribing to the planet feeds, but rather to subscribe to each of the author feeds separately and prune them as you go. Unfortunately, this whitelist approach is a high maintenance one since planets constantly add and remove feeds. I decided that I wanted to follow a blacklist approach instead.

PlanetFilter PlanetFilter is a local application that you can configure to fetch your favorite planets and filter the posts you see. If you get it via Debian or Ubuntu, it comes with a cronjob that looks at all configuration files in /etc/planetfilter.d/ and outputs filtered feeds in /var/cache/planetfilter/. You can either:
  • add file:///var/cache/planetfilter/planetname.xml to your local feed reader
  • serve it locally (e.g. http://localhost/planetname.xml) using a webserver, or
  • host it on a server somewhere on the Internet.
The software will fetch new posts every hour and overwrite the local copy of each feed. A basic configuration file looks like this:
[feed]
url = http://planet.debian.org/atom.xml
[blacklist]

Filters There are currently two ways of filtering posts out. The main one is by author name:
[blacklist]
authors =
  Alice Jones
  John Doe
and the other one is by title:
[blacklist]
titles =
  This week in review
  Wednesday meeting for
In both cases, if a blog entry contains one of the blacklisted authors or titles, it will be discarded from the generated feed.
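Putting it all together, a complete configuration file for a filtered Planet Debian feed might look like this (the author and title values are only examples):
[feed]
url = http://planet.debian.org/atom.xml

[blacklist]
authors =
  Alice Jones
  John Doe
titles =
  This week in review
  Wednesday meeting for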

Tor support Since blog updates happen asynchronously in the background, they can work very well over Tor. In order to set that up in the Debian version of planetfilter:
  1. Install the tor and polipo packages.
  2. Set the following in /etc/polipo/config:
     proxyAddress = "127.0.0.1"
     proxyPort = 8008
     allowedClients = 127.0.0.1
     allowedPorts = 1-65535
     proxyName = "localhost"
     cacheIsShared = false
     socksParentProxy = "localhost:9050"
     socksProxyType = socks5
     chunkHighMark = 67108864
     diskCacheRoot = ""
     localDocumentRoot = ""
     disableLocalInterface = true
     disableConfiguration = true
     dnsQueryIPv6 = no
     dnsUseGethostbyname = yes
     disableVia = true
     censoredHeaders = from,accept-language,x-pad,link
     censorReferer = maybe
    
  3. Tell planetfilter to use the polipo proxy by adding the following to /etc/default/planetfilter:
     export http_proxy="localhost:8008"
     export https_proxy="localhost:8008"
    

Bugs and suggestions The source code is available on repo.or.cz. I've been using this for over a month and it's been working quite well for me. If you give it a go and run into any problems, please file a bug! I'm also interested in any suggestions you may have.

1 February 2015

Francois Marier: Upgrading Lenovo ThinkPad BIOS under Linux

The Lenovo support site offers downloadable BIOS updates that can be run either from Windows or from a bootable CD. Here's how to convert the bootable CD ISO images under Linux in order to update the BIOS from a USB stick.

Checking the BIOS version Before upgrading your BIOS, you may want to look up which version of the BIOS you are currently running. To do this, install the dmidecode package:
apt-get install dmidecode
then run:
dmidecode
or alternatively, look at the following file:
cat /sys/devices/virtual/dmi/id/bios_version

Updating the BIOS using a USB stick To update without using a bootable CD, install the genisoimage package:
apt-get install genisoimage
then use geteltorito to convert the ISO you got from Lenovo:
geteltorito -o bios.img gluj19us.iso
Insert a USB stick you're willing to erase entirely and then copy the image onto it (replacing sdX with the correct device name, not partition name, for the USB stick):
dd if=bios.img of=/dev/sdX
then restart and boot from the USB stick by pressing Enter, then F12 when you see the Lenovo logo.

26 January 2015

Francois Marier: Using unattended-upgrades on Rackspace's Debian and Ubuntu servers

I install the unattended-upgrades package on almost all of my Debian and Ubuntu servers in order to ensure that security updates are automatically applied. It works quite well except that I still need to login manually to upgrade my Rackspace servers whenever a new rackspace-monitoring-agent is released because it comes from a separate repository that's not covered by unattended-upgrades. It turns out that unattended-upgrades can be configured to automatically upgrade packages outside of the standard security repositories but it's not very well documented and the few relevant answers you can find online are still using the old whitelist syntax.

Initial setup The first thing to do is to install the package if it's not already done:
apt-get install unattended-upgrades
and to answer yes to the automatic stable update question. If you don't see the question (because your debconf priority threshold is set too high -- change it with dpkg-reconfigure debconf), you can always trigger the question manually:
dpkg-reconfigure -plow unattended-upgrades
Once you've got that installed, the configuration file you need to look at is /etc/apt/apt.conf.d/50unattended-upgrades.

Whitelist matching criteria Looking at the unattended-upgrades source code, I found the list of things that can be used to match on in the whitelist:
  • origin (shortcut: o)
  • label (shortcut: l)
  • archive (shortcut: a)
  • suite (which is the same as archive)
  • component (shortcut: c)
  • site (no shortcut)
You can find the value for each of these fields in the appropriate *_Release file under /var/lib/apt/lists/. Note that the value of site is the hostname of the package repository, also present in the first part of these *_Release filenames (stable.packages.cloudmonitoring.rackspace.com in the example below). In my case, I was looking at the following inside /var/lib/apt/lists/stable.packages.cloudmonitoring.rackspace.com_debian-wheezy-x86%5f64_dists_cloudmonitoring_Release:
Origin: Rackspace
Codename: cloudmonitoring
Date: Fri, 23 Jan 2015 18:58:49 UTC
Architectures: i386 amd64
Components: main
...
which means that, in addition to site, the only things I could match on were origin and component since there are no Suite or Label fields in the Release file. This is the line I ended up adding to my /etc/apt/apt.conf.d/50unattended-upgrades:
 Unattended-Upgrade::Origins-Pattern {
         // Archive or Suite based matching:
         // Note that this will silently match a different release after
         // migration to the specified archive (e.g. testing becomes the
         // new stable).
 //      "o=Debian,a=stable";
 //      "o=Debian,a=stable-updates";
 //      "o=Debian,a=proposed-updates";
         "origin=Debian,archive=stable,label=Debian-Security";
         "origin=Debian,archive=oldstable,label=Debian-Security";
+        "origin=Rackspace,component=main";
 };

Testing To ensure that the config is right and that unattended-upgrades will pick up rackspace-monitoring-agent the next time it runs, I used:
unattended-upgrade --dry-run --debug
which should output something like this:
Initial blacklisted packages: 
Starting unattended upgrades script
Allowed origins are: ['origin=Debian,archive=stable,label=Debian-Security', 'origin=Debian,archive=oldstable,label=Debian-Security', 'origin=Rackspace,component=main']
Checking: rackspace-monitoring-agent (["<Origin component:'main' archive:'' origin:'Rackspace' label:'' site:'stable.packages.cloudmonitoring.rackspace.com' isTrusted:True>"])
pkgs that look like they should be upgraded: rackspace-monitoring-agent
...
Option --dry-run given, *not* performing real actions
Packages that are upgraded: rackspace-monitoring-agent

Making sure that automatic updates are happening In order to make sure that all of this is working and that updates are actually happening, I always install apticron on all of the servers I maintain. It runs once a day and emails me a list of packages that need to be updated and it keeps doing that until the system is fully up-to-date. The only thing missing from this is getting a reminder whenever a package update (usually the kernel) requires a reboot to take effect. That's where the update-notifier-common package comes in. Because that package will add a hook that will create the /var/run/reboot-required file whenever a kernel update has been installed, all you need to do is create a cronjob like this in /etc/cron.daily/reboot-required:
#!/bin/sh
cat /var/run/reboot-required 2> /dev/null || true
assuming of course that you are already receiving emails sent to the root user (if not, add the appropriate alias in /etc/aliases and run newaliases).

26 December 2014

Francois Marier: Making Firefox Hello work with NoScript and RequestPolicy

Firefox Hello is a new beta feature in Firefox 34 which gives users the ability to do plugin-free video-conferencing without leaving the browser (using WebRTC technology). If you cannot get it to work after adding the Hello button to the toolbar, this post may help.

Preferences to check There are a few preferences to check in about:config:
  • media.peerconnection.enabled should be true
  • network.websocket.enabled should be true
  • loop.enabled should be true
  • loop.throttled should be false

NoScript If you use the popular NoScript add-on, you will need to whitelist the following hosts:
  • about:loopconversation
  • hello.firefox.com
  • loop.services.mozilla.com
  • opentok.com
  • tokbox.com

RequestPolicy If you use the less popular but equally annoying RequestPolicy add-on, then you will need to whitelist the following destination host:
  • tokbox.com
as well as the following origin to destination mappings:
  • about:loopconversation -> firefox.com
  • about:loopconversation -> mozilla.com
  • about:loopconversation -> opentok.com
  • firefox.com -> mozilla.com
  • firefox.com -> mozilla.org
  • firefox.com -> opentok.com
  • mozilla.org -> firefox.com
I have unfortunately not been able to find a way to restrict tokbox.com to a set of (source, destination) pairs. I suspect that the use of websockets confuses RequestPolicy. If you find a more restrictive policy that works, please leave a comment!

26 November 2014

Francois Marier: Hiding network disconnections using an IRC bouncer

A bouncer can be a useful tool if you rely on IRC for team communication and instant messaging. The most common use of such a server is to be permanently connected to IRC and to buffer messages while your client is disconnected. However, that's not what got me interested in this tool. I'm not looking for another place where messages accumulate and wait to be processed later. I'm much happier if people email me when I'm not around. Instead, I wanted to do to irssi what mosh did to ssh clients: transparently handle and hide temporary disconnections. Here's how I set everything up.

Server setup The first step is to install znc:
apt-get install znc
Make sure you get the 1.0 series (in jessie or trusty, not wheezy or precise) since it has much better multi-network support. Then, as a non-root user, generate a self-signed TLS certificate for it:
openssl req -x509 -sha256 -newkey rsa:2048 -keyout znc.pem -nodes -out znc.crt -days 365
and make sure you use something like irc.example.com as the subject name, that is the URL you will be connecting to from your IRC client. Then install the certificate in the right place:
mkdir ~/.znc
mv znc.pem ~/.znc/
cat znc.crt >> ~/.znc/znc.pem
Once that's done, you're ready to create a config file for znc using the znc --makeconf command, again as the same non-root user:
  • create separate znc users if you have separate nicks on different networks
  • use your nickserv password as the server password for each network
  • enable ssl
  • say no to the chansaver and nickserv plugins
Finally, open the IRC port (tcp port 6697 by default) in your firewall:
iptables -A INPUT -p tcp --dport 6697 -j ACCEPT

Client setup (irssi) On the client side, the official documentation covers a number of IRC clients, but the irssi page was quite sparse. Here's what I used for the two networks I connect to (irc.oftc.net and irc.mozilla.org):
servers = (
  {
    address = "irc.example.com";
    chatnet = "OFTC";
    password = "fmarier/oftc:Passw0rd1!";
    port = "6697";
    use_ssl = "yes";
    ssl_verify = "yes";
    ssl_cafile = "~/.irssi/certs/znc.crt";
  },
  {
    address = "irc.example.com";
    chatnet = "Mozilla";
    password = "francois/mozilla:Passw0rd1!";
    port = "6697";
    use_ssl = "yes";
    ssl_verify = "yes";
    ssl_cafile = "~/.irssi/certs/znc.crt";
  }
);
Of course, you'll need to copy your znc.crt file from the server into ~/.irssi/certs/znc.crt. Make sure that you're no longer authenticating with the nickserv from within irssi. That's znc's job now.

Wrapper scripts So far, this is a pretty standard znc+irssi setup. What makes it work with my workflow is the wrapper script I wrote to enable znc before starting irssi and then prompt to turn it off after exiting:
#!/bin/bash
ssh irc.example.com "pgrep znc || znc"
irssi
read -p "Terminate the bouncer? [y/N] " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]
then
  ssh irc.example.com killall -sSIGINT znc
fi
Now, instead of typing irssi to start my IRC client, I use irc. If I'm exiting irssi before commuting or because I need to reboot for a kernel update, I keep the bouncer running. At the end of the day, I say yes to killing the bouncer. That way, I don't have a backlog to go through when I wake up the next day.

20 October 2014

Francois Marier: LXC setup on Debian jessie

Here's how to setup LXC-based "chroots" on Debian jessie. While this is documented on the Debian wiki, I had to tweak a few things to get the networking to work on my machine. Start by installing (as root) the necessary packages:
apt-get install lxc libvirt-bin debootstrap

Network setup I decided to use the default /etc/lxc/default.conf configuration (no change needed here):
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
lxc.network.hwaddr = 00:FF:AA:xx:xx:xx
lxc.network.ipv4 = 0.0.0.0/24
but I had to make sure that the "guests" could connect to the outside world through the "host":
  1. Enable IPv4 forwarding by putting this in /etc/sysctl.conf:
    net.ipv4.ip_forward=1
    
  2. and then applying it using:
    sysctl -p
    
  3. Ensure that the network bridge is automatically started on boot:
    virsh -c lxc:/// net-start default
    virsh -c lxc:/// net-autostart default
    
  4. and that it's not blocked by the host firewall, by putting this in /etc/network/iptables.up.rules:
    -A INPUT -d 224.0.0.251 -s 192.168.122.1 -j ACCEPT
    -A INPUT -d 192.168.122.255 -s 192.168.122.1 -j ACCEPT
    -A INPUT -d 192.168.122.1 -s 192.168.122.0/24 -j ACCEPT
    
  5. and applying the rules using:
    iptables-apply
    

Creating a container Creating a new container (in /var/lib/lxc/) is simple:
sudo MIRROR=http://http.debian.net/debian lxc-create -n sid64 -t debian -- -r sid -a amd64
You can start or stop it like this:
sudo lxc-start -n sid64 -d
sudo lxc-stop -n sid64

Connecting to a guest using ssh The ssh server is configured to require pubkey-based authentication for root logins, so you'll need to log into the console:
sudo lxc-stop -n sid64
sudo lxc-start -n sid64
then install a text editor inside the container because the root image doesn't have one by default:
apt-get install vim
then paste your public key in /root/.ssh/authorized_keys. Then you can exit the console (using Ctrl+a q) and ssh into the container. You can find out what IP address the container received from DHCP by typing this command:
sudo lxc-ls --fancy

Fixing Perl locale errors If you see a bunch of errors like these when you start your container:
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "fr_CA.utf8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
then log into the container as root and use:
dpkg-reconfigure locales
to enable the same locales as the ones you have configured in the host.

30 September 2014

Francois Marier: Encrypted mailing list on Debian and Ubuntu

Running an encrypted mailing list is surprisingly tricky. One of the first challenges is that you need to decide what the threat model is. Are you worried about someone compromising the list server? One of the subscribers stealing the list of subscriber email addresses? You can't just "turn on encryption", you have to think about what you're trying to defend against. I decided to use schleuder. Here's how I set it up.

Requirements What I decided to create was a mailing list where people could subscribe and receive emails encrypted to them from the list itself. In order to post, they need to send an email encrypted to the list's public key and signed using the private key of a subscriber. What the list then does is decrypt the email and encrypt it individually for each subscriber. This protects the emails while in transit, but is vulnerable to the list server itself being compromised since every list email transits through there at some point in plain text.

Installing the schleuder package The first thing to know about installing schleuder on Debian or Ubuntu is that at the moment it unfortunately depends on ruby 1.8. This means that you can only install it on Debian wheezy or Ubuntu precise: trusty and jessie won't work (until schleuder is ported to a more recent version of ruby). If you're running wheezy, you're fine, but if you're running precise, I recommend adding my ppa to your /etc/apt/sources.list to get a version of schleuder that actually lets you create a new list without throwing an error. Then, simply install this package:
apt-get install schleuder

Postfix configuration The next step is to configure your mail server (I use postfix) to handle the schleuder lists. This may be obvious but if you're like me and you're repurposing a server which hasn't had to accept incoming emails, make sure that postfix is set to the following in /etc/postfix/main.cf:
inet_interfaces = all
Then follow the instructions from /usr/share/doc/schleuder/README.Debian and finally add the following line (thanks to the wiki instructions) to /etc/postfix/main.cf:
local_recipient_maps = proxy:unix:passwd.byname $alias_maps $transport_maps

Creating a new list Once everything is set up, creating a new list is pretty easy. Simply run schleuder-newlist list@example.org and follow the instructions. After creating your list, remember to update /etc/postfix/transports and run postmap /etc/postfix/transports. Then you can test it by sending an email to LISTNAME-sendkey@example.com. You should receive the list's public key.

Adding list members Once your list is created, the list admin is the only subscriber. To add more people, you can send an admin email to the list or follow these instructions to do it manually:
  1. Get the person's GPG key: gpg --recv-key KEYID
  2. Verify that the key is trusted: gpg --fingerprint KEYID
  3. Add the person to the list's /var/lib/schleuder/HOSTNAME/LISTNAME/members.conf:
    - email: francois@fmarier.org
      key_fingerprint: 8C470B2A0B31568E110D432516281F2E007C98D1
    
  4. Export the public key: gpg --export -a KEYID
  5. Paste the exported key into the list's keyring: sudo -u schleuder gpg --homedir /var/lib/schleuder/HOSTNAME/LISTNAME/ --import

30 August 2014

Francois Marier: Outsourcing your webapp maintenance to Debian

Modern web applications are much more complicated than the simple Perl CGI scripts or PHP pages of the past. They usually start with a framework and include lots of external components both on the front-end and on the back-end. Here's an example from the Node.js back-end of a real application:
$ npm list | wc -l
256
What if one of these 256 external components has a security vulnerability? How would you know and what would you do if one of your direct dependencies had a hard-coded dependency on the vulnerable version? It's a real problem and of course one way to avoid this is to write everything yourself. But that's neither realistic nor desirable. However, it's not a new problem. It was solved years ago by Linux distributions for C and C++ applications. For some reason though, this learning has not propagated to the web where the standard approach seems to be to "statically link everything". What if we could build on the work done by Debian maintainers and the security team?

Case study - the Libravatar project As a way of discussing a different approach to the problem of dependency management in web applications, let me describe the decisions made by the Libravatar project.

Description Libravatar is a federated and free software alternative to the Gravatar profile photo hosting site. From a developer point of view, it's a fairly simple stack. The service is split between the master node, where you create an account and upload your avatar, and a few mirrors, which serve the photos to third-party sites. As with Gravatar, sites wanting to display images don't have to worry about a complicated protocol. In a nutshell, all that a site needs to do is hash the user's email and add that hash to a base URL. Where the federation kicks in is that every email domain is able to specify a different base URL via an SRV record in DNS. For example, francois@debian.org hashes to 7cc352a2907216992f0f16d2af50b070 and so the full URL is:
http://cdn.libravatar.org/avatar/7cc352a2907216992f0f16d2af50b070
whereas francois@fmarier.org hashes to 0110e86fdb31486c22dd381326d99de9 and the full URL is:
http://fmarier.org/avatar/0110e86fdb31486c22dd381326d99de9
due to the presence of an SRV record on fmarier.org.
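Both halves of that lookup can be reproduced from a shell: the hash is just the MD5 of the lowercased email address, and the delegation is a plain SRV query (the _avatars._tcp record name is the one the Libravatar API uses for HTTP delegation):
echo -n "francois@fmarier.org" | md5sum    # prints the 0110e86fdb31486c22dd381326d99de9 hash shown above
dig +short SRV _avatars._tcp.fmarier.org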

Ground rules The main rules that the project follows are to:
  1. only use Python libraries that are in Debian
  2. use the versions present in the latest stable release (including backports)
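Checking a candidate library against these rules only takes a moment; using python-django as an example package name, either of the following will tell you what stable and backports currently ship:
apt-cache policy python-django
rmadison python-django    # from the devscripts package; queries the Debian archive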

Deployment using packages In addition to these rules around dependencies, we decided to treat the application as if it were going to be uploaded to Debian:
  • It includes an "upstream" Makefile which minifies CSS and JavaScript, gzips them, and compiles PO files (i.e. a "build" step).
  • The Makefile includes a test target which runs the unit tests and some lint checks (pylint, pyflakes and pep8).
  • Debian packages are produced to encode the dependencies in the standard way as well as to run various setup commands in maintainer scripts and install cron jobs.
  • The project runs its own package repository using reprepro to easily distribute these custom packages.
  • In order to update the repository and the packages installed on servers that we control, we use fabric, which is basically a fancy way to run commands over ssh.
  • Mirrors can simply add our repository to their apt sources.list and upgrade Libravatar packages at the same time as their system packages.
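From the mirror operator's point of view, the whole procedure is therefore the familiar apt one, roughly as follows, with a hypothetical repository URL and package name standing in for the real ones:
# repository URL and package name are placeholders
echo "deb http://apt.example.org/libravatar wheezy main" > /etc/apt/sources.list.d/libravatar.list
apt-get update
apt-get install libravatar-mirror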

Results Overall, this approach has been quite successful and Libravatar has been a very low-maintenance service to run.

The ground rules have however limited our choice of libraries. For example, to talk to our queuing system, we had to use the raw Python bindings to the C Gearman library instead of being able to use a nice pythonic library which wasn't in Debian squeeze at the time. There is of course always the possibility of packaging a missing library for Debian and maintaining a backport of it until the next Debian release. This wouldn't be a lot of work considering the fact that responsible bundling of a library would normally force you to follow its releases closely and keep any dependencies up to date, so you may as well share the result of that effort. But in the end, it turns out that there is a lot of Python stuff already in Debian and we haven't had to package anything new yet.

Another thing that was somewhat scary, due to the number of packages that were going to get bumped to a new major version, was the upgrade from squeeze to wheezy. It turned out however that it was surprisingly easy to upgrade to wheezy's version of Django, Apache and Postgres. It may be a problem next time, but all that means is that you have to set a day aside every 2 years to bring everything up to date.

Problems The main problem we ran into is that we optimized for sysadmins and unfortunately made it harder for new developers to set up their environment. That's not very good from the point of view of welcoming new contributors, as there is quite a bit of friction in preparing and testing your first patch. That's why we're looking at encoding our setup instructions into a Vagrant script so that new contributors can get started quickly.

Another problem we faced is that because we use the Debian version of jQuery and minify our own JavaScript files in the build step of the Makefile, we were affected by the removal of the minified version of jQuery from that package. In our setup, there is no way to minify JavaScript files that are provided by other packages, so the only way to fix this would be to fork the package in our repository or (preferably) to work with the Debian maintainer and get it fixed globally in Debian.

One thing worth noting is that while the Django project is very good at issuing backwards-compatible fixes for security issues, sometimes there is no way around disabling broken features. In practice, this means that we cannot run unattended-upgrades on our main server in case something breaks. Instead, we make use of apticron to automatically receive email reminders for any outstanding package updates.

On that topic, it can occasionally take a while for security updates to be released in Debian, but this usually falls into one of two cases:
  1. You either notice because you're already tracking releases pretty well and therefore could help Debian with backporting of fixes and/or testing;
  2. or you don't notice because it has slipped through the cracks or there simply are too many potential things to keep track of, in which case the fact that it eventually gets fixed without your intervention is a huge improvement.
Finally, relying too much on Debian packaging does prevent users of Fedora (a project that also makes use of Libravatar) from easily contributing mirrors. Though if we had a concrete offer, we would certainly look into creating the appropriate RPMs.
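For reference, the apticron setup mentioned above is about as small as it gets; a minimal sketch, assuming a hypothetical recipient address:
apt-get install apticron
# /etc/apticron/apticron.conf -- usually the only setting worth changing
EMAIL="admin@example.org"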

Is it realistic? It turns out that I'm not the only one who thought about this approach, which has been named "debops". The same day that my talk was announced on the DebConf website, someone emailed me saying that he had instituted the exact same rules at his company, which operates a large Django-based web application in the US and Russia. It was pretty impressive to read about a real business coming to the same conclusions and using the same approach (i.e. system libraries, deployment packages) as Libravatar.

Regardless of this though, I think there is a class of applications that are particularly well-suited for the approach we've just described. If a web application is not your full-time job and you want to minimize the amount of work required to keep it running, then it's a good investment to restrict your options and leverage the work of the Debian community to simplify your maintenance burden.

The second criterion I would look at is framework maturity. Given the 2-3 year release cycle of stable distributions, this approach is more likely to work with a mature framework like Django. After all, you probably wouldn't compile Apache from source, but until recently building Node.js from source was the preferred option as it was changing so quickly.

While it goes against conventional wisdom, relying on system libraries is a sustainable approach you should at least consider in your next project. After all, there is a real cost in bundling and keeping up with external dependencies.

This blog post is based on a talk I gave at DebConf 14: slides, video.
